Student Details:

Imports

General Definitions

Importing Data

VGG16

Training And Validation Functions

Our VGG16 Model

Graphs & Samples

Loss & Accuracy

Pretrained Models

VGG16

Graphs & Samples

Loss & Accuracy

MobileNet

Graphs & Samples

Loss & Accuracy

GoogLeNet

Graphs & Samples

Loss & Accuracy

ResNet

Graphs & Samples

Loss & Accuracy

DenseNet

Graphs & Samples

Loss & Accuracy

Model Comparison

Training

Loss & Accuracy

Validation

Loss & Accuracy

Ensemble

Reducing the batch size is required here because of the memory limitations of Colab's GPU.
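The ensemble combines the predictions of the individually trained models. A minimal sketch of one common approach, averaging the members' output logits, is shown below; the `Ensemble` class name and the tiny stand-in member models are illustrative assumptions (in the notebook the members would be the five fine-tuned torchvision models):

```python
import torch
import torch.nn as nn

class Ensemble(nn.Module):
    """Averages the output logits of several trained classifiers."""
    def __init__(self, members):
        super().__init__()
        # ModuleList registers the members so their parameters are tracked.
        self.members = nn.ModuleList(members)

    def forward(self, x):
        # Stack each member's logits: (num_members, batch, num_classes),
        # then average over the member dimension.
        logits = torch.stack([m(x) for m in self.members], dim=0)
        return logits.mean(dim=0)

# Stand-in members (input dim 8, 3 classes) so the sketch runs anywhere;
# in the notebook these would be the five fine-tuned face models.
members = [nn.Linear(8, 3) for _ in range(5)]
ensemble = Ensemble(members).eval()
with torch.no_grad():
    out = ensemble(torch.randn(4, 8))
print(out.shape)  # torch.Size([4, 3])
```

Averaging logits (rather than hard votes) lets confident members outweigh uncertain ones while keeping the combination differentiable.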

Graphs & Samples

Loss & Accuracy

Summary

In this summary exercise, we used the faces dataset created in the previous exercise to train several face-recognition models.
We implemented our own VGG16 model, and as the results show, it does not train well.
In addition, we performed transfer learning with 5 different pre-trained torchvision models. All of them were pre-trained on the ImageNet dataset, which contains 1,000 classes and more than 1,000,000 images.
Finally, we built an ensemble model that combines the last layers of those 5 models in order to obtain better predictions.

Results

First, we compared our VGG16 model to its pre-trained counterpart. As the results show, the pre-trained model performed slightly better, but still not satisfactorily.
Besides VGG16, we trained MobileNet, GoogLeNet, ResNet and DenseNet models.
Among the pre-trained models, DenseNet achieved the best accuracy.
Apart from the VGG16 model, all of the other models gave us similar results.